
    Extracting and Re-rendering Structured Auditory Scenes from Field Recordings

    We present an approach to automatically extract and re-render a structured auditory scene from field recordings obtained with a small set of microphones freely positioned in the environment. From the recordings and the calibrated positions of the microphones, the 3D locations of various auditory events can be estimated together with their corresponding content. This structured description is independent of the reproduction setup. We propose solutions to classify foreground, well-localized sounds versus more diffuse background ambiance, and adapt our rendering strategy accordingly. Warping the original recordings during playback allows for simulating smooth changes in the listening position or in the position of sources. Comparisons to reference binaural and B-format recordings show that our approach achieves good spatial rendering while remaining independent of the reproduction setup and offering extended authoring capabilities.
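    The localization step relies on cues such as time differences of arrival between microphone pairs. As a rough illustration — a generic building block, not the paper's actual pipeline, and `estimate_delay` with its parameters is our own — the offset between two recordings can be estimated by cross-correlation:

    ```python
    import numpy as np

    def estimate_delay(a, b, sample_rate):
        """Estimate the time offset of signal `a` relative to `b` by
        locating the peak of their cross-correlation. Given calibrated
        microphone positions, such pairwise delays are the raw material
        for triangulating the 3D location of an auditory event."""
        corr = np.correlate(a, b, mode="full")
        lag = int(np.argmax(corr)) - (len(b) - 1)
        return lag / sample_rate

    # a click reaching the second microphone 10 samples later
    mic1 = np.zeros(64); mic1[20] = 1.0
    mic2 = np.zeros(64); mic2[30] = 1.0
    delay = estimate_delay(mic2, mic1, 1000)  # 0.01 s
    ```

    Intersecting the distance differences implied by several such delays, given the known microphone positions, yields the event location.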

    Efficient 3D Audio Processing on the GPU

    Audio processing applications are among the most compute-intensive and often rely on additional DSP resources for real-time performance. However, programmable audio DSPs are in general only available to product developers. Professional audio boards with multiple DSPs usually support specific effects and products, while consumer "game-audio" hardware still only implements fixed-function pipelines which evolve at a rather slow pace. The widespread availability and increasing processing power of GPUs could offer an alternative solution. GPU features, like multiply-accumulate instructions or multiple execution units, are similar to those of most DSPs [3]. Besides, 3D audio rendering applications require a significant number of geometric calculations, which are a perfect fit for the GPU. Our feasibility study investigates the use of GPUs for efficient audio processing.
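    The multiply-accumulate pattern shared by DSPs and GPUs is the core of basic audio filtering. A minimal CPU-side sketch of a direct-form FIR filter (our illustration; the study itself maps such loops to GPU hardware):

    ```python
    import numpy as np

    def fir_filter(signal, taps):
        """Direct-form FIR filter: each output sample is a
        multiply-accumulate over the tap weights, the operation
        both DSPs and GPUs accelerate."""
        n, k = len(signal), len(taps)
        padded = np.concatenate([np.zeros(k - 1), signal])
        out = np.empty(n)
        for i in range(n):
            # multiply-accumulate: dot product of taps with a signal window
            out[i] = np.dot(taps[::-1], padded[i:i + k])
        return out

    sig = np.array([1.0, 0.0, 0.0, 0.0])
    taps = np.array([0.5, 0.25])
    out = fir_filter(sig, taps)  # impulse response equals the taps
    ```

    On a GPU the per-sample dot products run in parallel across execution units rather than in a Python loop.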

    ECHO & NarSYS - An acoustic modeler and sound renderer

    Computer graphics simulations are now widely used in the field of environmental modelling, for example to evaluate the visual impact of an architectural project on its environment and interactively change its design. Realistic sound simulation is equally important for environmental modelling. At iMAGIS, a joint project of INRIA, CNRS, Joseph Fourier University and the Institut National Polytechnique de Grenoble, we are currently developing an integrated interactive acoustic modelling and sound rendering system for virtual environments. The aim of the system is to provide an interactive simulation of global sound propagation in a given environment, and an integrated sound/computer-graphics rendering to obtain computer-simulated movies of the environment with realistic and coherent soundtracks.

    Soundtracks for Computer Animation : Sound Rendering in Dynamic Environments with Occlusions

    With the development of virtual reality systems and multi-modal simulations, soundtrack generation is becoming an important issue in computer graphics. In the context of computer-generated animation, many more parameters than the sole object geometry, as well as specific events, can be used to generate, control and render a soundtrack that fits the object motions. Producing a convincing soundtrack involves rendering the interactions of sound with the dynamic environment, in particular sound reflections and sound absorption due to partial occlusions, which usually imply an unacceptable computational cost. We present an integrated approach to sound and image rendering in a computer animation context, which allows the animator to recreate the process of sound recording while "physical effects" are computed automatically. Moreover, our sound rendering process efficiently combines a sound reflection model and an attenuation model accounting for scattering/diffraction by partial occluders, through the use of graphics hardware, allowing interactive computation rates.

    Un modeleur interactif d'objets définis par des surfaces implicites

    Published under the name Marie-Paule Gascuel. This article presents a modeling tool for the interactive design of objects defined by skeleton-generated implicit surfaces. Based on a new adaptive sampling technique, it offers two complementary real-time visualization modes and allows the user to control the defined objects in a very simple way, by interactively manipulating their skeletons and the associated field functions.
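    A skeleton-based implicit surface is an iso-contour of a field summed over skeleton contributions. A minimal sketch with point skeletons and Gaussian field functions (our choice of field function; in the tool these are user-editable):

    ```python
    import numpy as np

    def field(p, skeletons, stiffness=1.0):
        """Sum of Gaussian field contributions from point skeletons;
        the implicit surface is an iso-contour of this field, e.g.
        the set of points where field(p) == 0.5."""
        p = np.asarray(p, dtype=float)
        return sum(np.exp(-stiffness * np.sum((p - np.asarray(s)) ** 2))
                   for s in skeletons)

    # two blended blobs along the x axis
    skeletons = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0)]
    on_axis = field((0.75, 0.0, 0.0), skeletons)   # midway: fields blend
    far_away = field((10.0, 0.0, 0.0), skeletons)  # field decays to ~0
    ```

    Moving a skeleton point or changing its stiffness deforms the surface smoothly, which is what makes interactive manipulation practical.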

    Fast Modal Sounds with Scalable Frequency-Domain Synthesis

    Audio rendering of impact sounds, such as those caused by falling objects or explosion debris, adds realism to interactive 3D audiovisual applications, and can be convincingly achieved using modal sound synthesis. Unfortunately, mode-based computations can become prohibitively expensive when many objects, each with many modes, are impacted simultaneously. We introduce a fast sound synthesis approach, based on short-time Fourier transforms, that exploits the inherent sparsity of modal sounds in the frequency domain. For our test scenes, this "fast mode summation" can give speedups of 5-8 times compared to a time-domain solution, with only slight degradation in quality. We discuss different reconstruction windows, which affect the quality of impact sound "attacks". Our Fourier-domain processing method allows us to introduce a scalable, real-time audio processing pipeline for both recorded and modal sounds, with auditory masking and sound source clustering. To avoid abrupt computation peaks, such as during the simultaneous impacts of an explosion, we use crossmodal perception results on audiovisual synchrony to perform temporal scheduling. We also conducted a pilot perceptual user evaluation of our method. Our implementation results show that we can treat complex audiovisual scenes in real time with high quality.
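    A modal impact sound is a sum of exponentially damped sinusoids, one per mode. The time-domain baseline that the fast mode summation accelerates can be sketched as follows (the mode parameters below are hypothetical, not from the paper):

    ```python
    import numpy as np

    def modal_impact(freqs, gains, decays, sr=44100, dur=0.05):
        """Time-domain modal synthesis: sum one damped sinusoid per
        mode. This cost grows with (modes x samples), which is what
        the frequency-domain 'fast mode summation' avoids."""
        t = np.arange(int(sr * dur)) / sr
        out = np.zeros_like(t)
        for f, g, d in zip(freqs, gains, decays):
            out += g * np.exp(-d * t) * np.sin(2 * np.pi * f * t)
        return out

    # hypothetical mode parameters for a small struck object
    s = modal_impact([440.0, 1230.0], [1.0, 0.5], [60.0, 90.0])
    ```

    Because each mode occupies only a narrow band, the same signal is sparse in each short-time Fourier frame, which is the property the paper exploits.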

    Prioritizing signals for selective real-time audio processing

    Presented at the 11th International Conference on Auditory Display (ICAD 2005). This paper studies various priority metrics that can be used to progressively select sub-parts of a number of audio signals for real-time processing. In particular, five level-related metrics were examined: RMS level, A-weighted level, the Zwicker and Moore loudness models, and a masking-threshold-based model. We conducted a pilot subjective evaluation study aimed at determining which metric would perform best at reconstructing mixtures of various types (speech, ambient and music) using only a budgeted amount of the original audio data. Our results suggest that A-weighting performs the worst, while results obtained with loudness metrics appear to depend on the type of signal. RMS level offers a good compromise in all cases. Our results also show that significant sub-parts of the original audio data can be omitted in most cases without noticeable degradation in the generated mixtures, which validates the usability of our selective processing approach for real-time applications. In this context, we successfully implemented a prototype 3D audio rendering pipeline using our selective approach.
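    The RMS metric recommended above can drive frame selection as follows — a minimal sketch, assuming fixed-size non-overlapping frames and a global frame budget (the framing details are ours):

    ```python
    import numpy as np

    def select_frames_by_rms(signals, frame_len, budget):
        """Rank all frames from all signals by RMS level and keep
        only the `budget` highest-energy frames for processing."""
        scored = []
        for s_idx, sig in enumerate(signals):
            for start in range(0, len(sig) - frame_len + 1, frame_len):
                frame = sig[start:start + frame_len]
                rms = np.sqrt(np.mean(frame ** 2))
                scored.append((rms, s_idx, start))
        scored.sort(reverse=True)
        # return (signal index, frame start) pairs that fit the budget
        return [(s, st) for _, s, st in scored[:budget]]

    loud = np.ones(8)
    quiet = np.full(8, 0.1)
    keep = select_frames_by_rms([loud, quiet], frame_len=4, budget=2)
    ```

    With a budget of two frames, only the loud signal's frames survive; the quiet signal is skipped entirely, which is the "selective processing" saving.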

    Topological Sound Propagation with Reverberation Graphs

    Reverberation graphs are a novel approach to estimate global sound-pressure decay and auralize the corresponding reverberation effects in interactive virtual environments. We use a 3D model to represent the geometry of the environment explicitly, and we subdivide it into a series of coupled spaces connected by portals. Off-line geometrical-acoustics techniques are used to precompute transport operators, which encode pressure-decay characteristics within each space and between coupling interfaces. At run time, during an interactive simulation, we traverse the adjacency graph corresponding to the spatial subdivision of the environment. We combine transport operators along different sound propagation routes to estimate the pressure decay envelopes from the sources to the listener. Our approach compares well with off-line geometrical techniques, but computes reverberation decay envelopes at interactive rates, ranging from 12 to 100 Hz. We propose a scalable artificial reverberator that uses these decay envelopes to auralize reverberation effects, including room coupling. Our complete system can render as many as 30 simultaneous sources in large dynamic virtual environments.
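    The run-time traversal can be pictured on a toy adjacency graph. In this sketch (ours) each transport operator is collapsed to a single scalar gain per portal, whereas the actual method combines full pressure-decay envelopes:

    ```python
    def path_gains(graph, src, dst, gain=1.0, visited=None):
        """Enumerate simple routes through the room-adjacency graph,
        multiplying the per-portal transport gains along each route."""
        visited = (visited or set()) | {src}
        if src == dst:
            yield gain
            return
        for nxt, g in graph.get(src, []):
            if nxt not in visited:
                yield from path_gains(graph, nxt, dst, gain * g, visited)

    # rooms connected by portals, each with a precomputed scalar gain
    rooms = {"hall": [("corridor", 0.5)],
             "corridor": [("hall", 0.5), ("office", 0.25)]}
    # single route hall -> corridor -> office: 0.5 * 0.25 = 0.125
    total = sum(path_gains(rooms, "hall", "office"))
    ```

    Summing the contributions of all routes gives the listener's estimate; replacing the scalars with decay envelopes recovers the flavor of the real method.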

    Breaking the 64 spatialized sources barrier

    Spatialized soundtracks and sound effects are standard elements of today's video games. However, although 3D audio modeling and content creation tools (e.g., Creative Labs' EAGLE [4]) provide some help to game audio designers, the number of available 3D audio hardware channels remains limited, usually ranging from 16 to 64 in the best case. While one can wonder whether more hardware channels are actually required, it is clear that large numbers of spatialized sources might be needed to render a realistic environment. This problem becomes even more significant if extended sound sources are to be simulated: think of a train, for instance, which is far too long to be represented as a point source. Since current hardware and APIs implement only point-source models or limited extended-source models [2,3,5], a large number of such sources would be required to achieve a realistic effect. Finally, 3D-audio channels might also be used for a reproduction-independent representation of surround music tracks, leaving the generation of the final mix to the audio rendering API but requiring the programmer to assign some of the precious 3D channels to the soundtrack. Also, the dynamic allocation schemes currently available in game APIs (e.g., DirectSound 3D [2]) remain very basic. As a result, game audio designers and developers have to spend a lot of effort to best map the potentially large number of sources to the limited number of channels. In this paper, we provide some answers to this problem by reviewing and introducing several automatic techniques to achieve efficient hardware mapping of complex dynamic audio scenes with currently available hardware resources.
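    One simple automatic mapping strategy of the kind reviewed — the details here are our own — is to give the most important sources their own hardware channel and premix the rest into the nearest allocated one:

    ```python
    import numpy as np

    def map_sources_to_channels(positions, loudness, n_channels):
        """Greedy hardware mapping: the loudest sources each get a
        spatialized channel; every remaining source is premixed into
        the spatially nearest allocated channel."""
        order = np.argsort(loudness)[::-1]
        channels = list(order[:n_channels])          # one source per channel
        mapping = {int(i): int(i) for i in channels}
        for i in order[n_channels:]:
            dists = [np.linalg.norm(positions[i] - positions[c])
                     for c in channels]
            mapping[int(i)] = int(channels[int(np.argmin(dists))])
        return mapping

    # four sources, two hardware channels (illustrative data)
    positions = np.array([[0.0, 0.0], [10.0, 0.0], [0.5, 0.0], [9.0, 0.0]])
    loudness = np.array([1.0, 0.9, 0.2, 0.1])
    mapping = map_sources_to_channels(positions, loudness, n_channels=2)
    ```

    A greedy scheme like this trades spatial accuracy of the quiet sources for staying within the channel budget; perceptual metrics can replace raw loudness in the ranking.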

    Virtual reality: three-dimensional sound simulation, by Nicolas Tsingos. Phobias put to the test of the virtual, an interview with Isabelle Viaud-Delmon conducted by Dominique Chouchan. The video game is a very high-technology product, by David Alloza

    In image synthesis, we are interested in simulating shapes, colors, shadows, lighting... Adding sound to a synthetic world is just as demanding if it is to reflect our spatial perception of sounds.